Results 1 - 10 of 10
1.
Sensors (Basel) ; 24(9)2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38732905

ABSTRACT

High-pressure pipelines are critical for transporting hazardous materials over long distances, but they face threats from third-party interference activities. Preventive measures are implemented, but interference accidents can still occur, making high-quality detection strategies vital. This paper proposes an end-to-end Artificial Intelligence of Things (AIoT) solution to detect potential interference threats in real time. The solution involves developing a smart visual sensor capable of processing images using state-of-the-art computer vision algorithms and transmitting alerts to pipeline operators in real time. The system's core is an object-detection model; two candidates, You Only Look Once version 4 (YOLOv4) and DETR with Improved deNoising anchOr boxes (DINO), were trained on a custom Pipeline Visual Threat Assessment (Pipe-VisTA) dataset. Among the trained models, DINO achieved the best Mean Average Precision (mAP) of 71.2% on the unseen test dataset. However, for deployment on an edge computer with limited computational ability (the NVIDIA Jetson Nano), the simpler, TensorRT-optimized YOLOv4 model was used, which achieved a mAP of 61.8% on the test dataset. The developed AIoT device captures an image with a camera, processes it on the edge using the trained YOLOv4 model to detect potential threats, transmits threat alerts to a Fleet Portal via LoRaWAN, and hosts the alerts on a dashboard via a satellite network. The device was fully field-tested to ensure its functionality prior to deployment for the SEA Gas use case. The AIoT smart solution has been deployed along a 10 km stretch of the SEA Gas pipeline in the Murray Bridge section; in total, 48 AIoT devices and three Fleet Portals were installed to ensure line-of-sight communication between the devices and portals.
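
A minimal sketch of the capture-detect-alert loop described above, using OpenCV's DNN module to run a YOLOv4 network. The model files, threat classes, and the LoRaWAN hand-off are placeholder assumptions; the deployed device uses a TensorRT-optimized YOLOv4 on the Jetson Nano, which this sketch only approximates.

    # Hedged sketch: capture -> detect -> alert. Paths and classes are
    # hypothetical; cv2.dnn stands in for the TensorRT-optimized engine.
    import cv2

    CFG, WEIGHTS = "yolov4.cfg", "yolov4.weights"    # hypothetical files
    CLASSES = ["excavator", "truck", "auger"]        # illustrative threats

    net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    def send_alert(payload: bytes) -> None:
        """Stand-in for the LoRaWAN uplink to the Fleet Portal."""
        print("ALERT:", payload)

    cap = cv2.VideoCapture(0)                        # device camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        class_ids, scores, _boxes = model.detect(
            frame, confThreshold=0.5, nmsThreshold=0.4)
        for cid, score in zip(class_ids, scores):
            send_alert(f"{CLASSES[int(cid)]}:{float(score):.2f}".encode())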

2.
Sci Rep ; 14(1): 8734, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627460

ABSTRACT

This research aimed to determine whether accomplished surfers could accurately perceive how changes to surfboard fin design affected their surfing performance. Four different surfboard fins, including conventional, single-grooved, and double-grooved fins, were developed using computer-aided design combined with additive manufacturing (3D printing). We systematically installed these 3D-printed fins into instrumented surfboards, which six accomplished surfers rode on ocean waves in a random order while blinded to the fin condition. We quantified the surfers' wave-riding performance during each surfing bout using a sport-specific tracking device embedded in each instrumented surfboard. After each fin condition, the surfers rated their perceptions of the Drive, Feel, Hold, Speed, Stiffness, and Turnability they experienced while performing turns using a visual analogue scale. Relationships between the surfers' perceptions of the fins and the performance data collected from the tracking devices were then examined. The results revealed that participants preferred the single-grooved fins for Speed and Feel, followed by double-grooved fins, commercially available fins, and conventional fins without grooves. Crucially, the surfers' perceptions of their performance matched the objective data from the embedded sensors. Our findings demonstrate that accomplished surfers can perceive how changes to surfboard fins influence their surfing performance.
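
One plausible way to examine the perception-versus-performance relationship described above is a simple correlation between per-condition ratings and tracker-derived speeds. The sketch below does this with Pearson's r; all values are fabricated for illustration and the statistical method is an assumption, not the study's stated analysis.

    # Hedged sketch: correlate per-condition VAS ratings with tracker
    # speeds. All values below are fabricated for illustration only.
    from scipy.stats import pearsonr

    vas_speed_rating = [7.8, 6.9, 5.1, 4.6]   # mean rating per fin condition
    tracker_speed_ms = [5.4, 5.1, 4.3, 4.0]   # mean riding speed (m/s)

    r, p = pearsonr(vas_speed_rating, tracker_speed_ms)
    print(f"r = {r:.2f}, p = {p:.3f}")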

3.
Sensors (Basel) ; 24(4)2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38400222

ABSTRACT

Vegetation in East Antarctica, such as moss and lichen, is vulnerable to the effects of climate change and ozone depletion and requires robust, non-invasive methods to monitor its health. Despite the increasing use of unmanned aerial vehicles (UAVs) to acquire high-resolution data for vegetation analysis in Antarctic regions through artificial intelligence (AI) techniques, the use of multispectral imagery and deep learning (DL) remains quite limited. This study addresses this gap with two pivotal contributions: (1) it underscores the potential of DL in a field with notably limited implementations for these datasets; and (2) it introduces an innovative workflow that compares the performance of two supervised machine learning (ML) classifiers: Extreme Gradient Boosting (XGBoost) and U-Net. The proposed workflow is validated by detecting and mapping moss and lichen using data collected in the highly biodiverse Antarctic Specially Protected Area (ASPA) 135, situated near Casey Station, between January and February 2023. The ML models were trained on five classes: Healthy Moss, Stressed Moss, Moribund Moss, Lichen, and Non-vegetated. In developing the U-Net model, two methods were applied: Method 1, which used the same original labelled data as XGBoost; and Method 2, which incorporated the XGBoost predictions as an additional input to U-Net. Results indicate that XGBoost demonstrated robust performance, exceeding 85% in key metrics such as precision, recall, and F1-score. The workflow enhanced the accuracy of the U-Net classification outputs: Method 2 yielded a substantial increase in precision, recall, and F1-score compared with Method 1, with notable improvements such as precision for Healthy Moss (Method 2: 94% vs. Method 1: 74%) and recall for Stressed Moss (Method 2: 86% vs. Method 1: 69%). These findings contribute to advancing non-invasive monitoring techniques for the delicate Antarctic ecosystems, showcasing the potential of UAVs, high-resolution multispectral imagery, and ML models in remote sensing applications.


Subject(s)
Artificial Intelligence , Remote Sensing Technology , Remote Sensing Technology/methods , Ecosystem , Unmanned Aerial Devices , Antarctic Regions
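
The "Method 2" stacking step in the record above lends itself to a short illustration: per-pixel XGBoost class probabilities are appended to the multispectral bands before the tile is passed to U-Net. This is a minimal sketch under assumed shapes with random stand-in data, not the authors' implementation.

    # Hedged sketch: stack XGBoost per-pixel probabilities onto the bands.
    import numpy as np
    from xgboost import XGBClassifier

    H, W, BANDS, N_CLASSES = 64, 64, 5, 5   # five classes incl. Non-vegetated
    image = np.random.rand(H, W, BANDS).astype(np.float32)   # stand-in tile
    labels = np.random.randint(0, N_CLASSES, (H, W))         # stand-in labels

    pixels = image.reshape(-1, BANDS)
    clf = XGBClassifier(n_estimators=100, max_depth=6)
    clf.fit(pixels, labels.reshape(-1))

    proba = clf.predict_proba(pixels).reshape(H, W, N_CLASSES)
    unet_input = np.concatenate([image, proba], axis=-1)  # BANDS + N_CLASSES
    print(unet_input.shape)                               # (64, 64, 10)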
4.
Sensors (Basel) ; 22(22)2022 Nov 09.
Article in English | MEDLINE | ID: mdl-36433251

ABSTRACT

With the growth of large camera networks around us, it is becoming increasingly difficult to identify vehicles manually. Computer vision enables us to automate this task. More specifically, vehicle re-identification (ReID) aims to identify cars across a camera network with non-overlapping views. Images captured of vehicles can undergo intense variations in appearance due to illumination, pose, or viewpoint. Furthermore, because of small inter-class differences and large intra-class variations, feature learning is often enhanced with non-visual cues, such as the topology of the camera network and temporal information. These are, however, not always available, and can be resource-intensive for the model. Following the success of Transformer baselines in ReID, we propose for the first time an outlook-attention-based vehicle ReID framework using the Vision Outlooker as its backbone, which is able to encode finer-level features. We show that, without embedding any additional side information and using only visual cues, we can achieve 80.31% mAP and 97.13% R-1 on the VeRi-776 dataset. Besides documenting our research, this paper also aims to provide a comprehensive walkthrough of vehicle ReID. We aim to provide a starting point for individuals and organisations, as it is difficult to navigate the myriad of complex research in this field.


Subject(s)
Artificial Intelligence , Motor Vehicles , Humans
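
To make the ReID evaluation in the record above concrete, the sketch below ranks gallery images by cosine similarity to each query embedding and reports Rank-1 accuracy. In the paper the embeddings come from a Vision Outlooker (VOLO) backbone; here random vectors stand in purely to show the ranking logic.

    # Hedged sketch: rank gallery by cosine similarity, report Rank-1.
    import numpy as np

    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    q_feats = l2norm(np.random.randn(10, 256))   # placeholder query embeddings
    q_ids = np.arange(10)
    g_feats = l2norm(np.random.randn(100, 256))  # placeholder gallery embeddings
    g_ids = np.random.randint(0, 10, 100)

    sim = q_feats @ g_feats.T            # cosine similarity of unit vectors
    top1 = g_ids[sim.argmax(axis=1)]     # best gallery match per query
    print("Rank-1:", (top1 == q_ids).mean())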
5.
Sensors (Basel) ; 22(20)2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36298170

ABSTRACT

Increased global waste generation rates over the last few decades have made waste management a significant problem. One approach adopted globally is to recycle a significant portion of the generated waste. However, contamination of recyclable waste has been a major problem in this context, rendering almost 75% of recyclable waste unusable. For sustainable development, efficient management and recycling of waste are hugely important. To reduce waste contamination rates, a manual bin-tagging approach is conventionally adopted; however, this is inefficient and labor-intensive. Within household waste, plastic bags have been found to be one of the main contaminants. To automate the detection of plastic-bag contamination, this paper proposes an edge-computing video analytics solution using the latest Artificial Intelligence (AI), Artificial Intelligence of Things (AIoT), and computer vision technologies. The proposed system captures video of waste from the truck hopper, processes it on edge-computing hardware to detect plastic-bag contamination, and stores the contamination-related information for further analysis. Faster R-CNN and You Only Look Once version 4 (YOLOv4) deep learning model variants were trained on the Remondis Contamination Dataset (RCD), developed from Remondis manual-tagging historical records. The overall system was evaluated in terms of software and hardware performance using standard measures (i.e., training performance, testing performance, Frames Per Second (FPS), system usage, and power consumption). From this detailed analysis, YOLOv4 with the CSPDarkNet_tiny backbone was identified as a suitable candidate, with a Mean Average Precision (mAP) of 63% and 24.8 FPS on NVIDIA Jetson TX2 hardware. Data collected from the deployment of the edge-computing hardware on waste collection trucks was used to retrain the models, and the retrained YOLOv4 model with the CSPDarkNet_tiny backbone achieved improved performance in terms of mAP, False Positives (FPs), False Negatives (FNs), and True Positives (TPs). A detailed cost analysis of the proposed system is also provided for stakeholders and policy makers.


Subject(s)
Plastics , Waste Management , Artificial Intelligence , Recycling
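
As an illustration of how the True Positives, False Positives, and False Negatives reported above can be tallied for a detector, the following sketch matches predicted boxes to ground truth by Intersection-over-Union (IoU). The boxes and the 0.5 threshold are illustrative assumptions, not the paper's exact protocol.

    # Hedged sketch: a prediction is a TP if its IoU with an unmatched
    # ground-truth box exceeds a threshold; leftovers are FPs and FNs.
    def iou(a, b):
        """IoU of two [x1, y1, x2, y2] boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    preds = [[10, 10, 50, 50], [70, 70, 90, 90]]   # illustrative detections
    gts   = [[12, 12, 48, 52]]                     # illustrative ground truth

    matched, tp = set(), 0
    for p in preds:
        j = max(range(len(gts)), key=lambda i: iou(p, gts[i]), default=None)
        if j is not None and j not in matched and iou(p, gts[j]) >= 0.5:
            matched.add(j); tp += 1
    fp, fn = len(preds) - tp, len(gts) - len(matched)
    print(tp, fp, fn)   # 1 1 0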
6.
Heliyon ; 7(11): e08405, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34841111

ABSTRACT

An escalation in the frequency and intensity of natural disasters has been observed over the last decade, forcing communities to develop innovative technological solutions to reduce disaster impact. The multidisciplinary nature of disaster management calls for collaboration between different disciplines for an efficient outcome; however, such a collaborative framework is lacking in the literature. A common taxonomy and interpretation of disaster-management-related constraints are critical for developing efficient technological solutions. This article proposes a process-driven, need-oriented framework to facilitate the review of technology-based contributions to disaster management. The proposed framework aims to bring technological contributions and disaster management activities into a single frame to better classify and analyse the literature. A systematic review of benchmark disruptive-technology-based contributions to disaster management has been performed using the proposed framework. Furthermore, a set of basic requirements and constraints for each phase of the disaster management process is proposed, and the cited literature is analysed to highlight corresponding trends. Finally, the scope of computer vision in disaster management is explored, and potential activities where computer vision could be used in the future are highlighted.

7.
PLoS One ; 15(4): e0231778, 2020.
Article in English | MEDLINE | ID: mdl-32330173

ABSTRACT

Air pollution with PM2.5 (particulate matter smaller than 2.5 micrometres in diameter) is a major health hazard in many cities worldwide, but since measuring instruments have traditionally been expensive, monitoring sites are rare and generally show only background concentrations. With the advent of low-cost, wirelessly connected sensors, air quality measurements are increasingly being made in places where many people spend time and pollution is much worse: on streets near traffic. In the interests of enabling members of the public to measure the air that they breathe, we took an open-source approach to designing a device for measuring PM2.5. Parts are relatively cheap but of good quality, and can easily be found in electronics or hardware stores, or online. The software is open source, and the free, LoRaWAN-based "The Things Network" serves as the communications platform. A number of the low-cost sensors we tested had problems, but those selected performed well when co-located with reference-quality instruments. A network of the devices was deployed in an urban centre, yielding valuable data for an extended period. Concentrations of PM2.5 at street level were often ten times worse than at air quality stations. The devices and network offer the opportunity for measurements in locations that concern the public.


Subject(s)
Air Pollutants/analysis , Air Pollution/prevention & control , Community Participation , Environmental Monitoring/instrumentation , Particulate Matter/analysis , Air Pollution/adverse effects , Cities , Environmental Monitoring/methods , Humans , Limit of Detection , New South Wales , Particulate Matter/adverse effects , Vehicle Emissions/analysis , Wildfires
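
A minimal sketch of how a low-cost particulate sensor of this kind can be read over a serial port, assuming a Plantower-style 32-byte frame (e.g., a PMS5003). The sensor model, port name, and frame layout are assumptions; the paper describes its own open hardware design rather than this exact part.

    # Hedged sketch: hunt for the 0x42 0x4D header of a Plantower-style
    # frame and decode atmospheric PM2.5. Checksum handling is omitted.
    import serial  # pyserial

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
        while True:
            if port.read(1) != b"\x42" or port.read(1) != b"\x4d":
                continue
            frame = port.read(30)         # remainder of the 32-byte frame
            if len(frame) != 30:
                continue
            pm25 = int.from_bytes(frame[10:12], "big")  # ug/m3, atmospheric
            print("PM2.5:", pm25)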
8.
Sensors (Basel) ; 19(22)2019 Nov 16.
Article in English | MEDLINE | ID: mdl-31744161

ABSTRACT

Floods are amongst the most common and devastating of all natural hazards. The alarming number of flood-related deaths and the financial losses suffered annually across the world call for improved responses to flood risk. Interestingly, the last decade has presented great opportunities, with a series of scholarly works exploring how camera images and wireless sensor data from Internet-of-Things (IoT) networks can improve flood management. This paper presents a systematic review of the literature on IoT-based sensors and computer vision applications in flood monitoring and mapping. The paper contributes by highlighting the main computer vision techniques and IoT sensor approaches used in the literature for real-time flood monitoring, flood modelling, mapping, and early warning systems, including the estimation of water level. It further contributes by providing recommendations for future research. In particular, the study recommends ways in which computer vision and IoT sensor techniques can be harnessed to better monitor and manage coastal lagoons, an aspect that is under-explored in the literature.

9.
Sensors (Basel) ; 19(11)2019 Jun 10.
Article in English | MEDLINE | ID: mdl-31185660

ABSTRACT

Non-GPS localization has gained much interest from researchers and industry recently because GPS can fail to meet accuracy requirements in shadowing environments. The two most common range-based non-GPS localization methods, Received Signal Strength Indicator (RSSI) and Angle-of-Arrival (AOA), have been discussed intensively in the literature over the last decade. However, an in-depth analysis of weighted combinations of AOA and RSSI in shadowing environments is still missing from the state-of-the-art. This paper proposes several weighted combinations of the RSSI and AOA components in the form p·AOA + q·RSSI, devises a mathematical model for analyzing shadowing effects, and evaluates these weighted-combination localization methods from both accuracy and precision perspectives. Our simulations show that increasing the number of anchors does not necessarily improve precision and accuracy, that the AOA component is less susceptible to shadowing than the RSSI component, and that increasing the weight of the AOA component while reducing that of the RSSI component improves accuracy and precision at high Signal-to-Noise Ratios (SNRs). This observation suggests that a power control algorithm could automatically increase the transmitted power when the channel experiences strong shadowing to maintain a high SNR, thus guaranteeing both accuracy and precision of the weighted-combination localization techniques.
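
A worked sketch of one plausible p·AOA + q·RSSI estimator: a weighted least-squares fit that mixes bearing residuals (AOA) with range residuals (RSSI-derived distances). The anchor layout, weights, and noise-free measurements are illustrative assumptions, not the paper's simulation setup.

    # Hedged sketch: weighted least-squares mixing AOA and RSSI residuals.
    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    true_pos = np.array([4.0, 3.0])
    diffs = true_pos - anchors
    bearings = np.arctan2(diffs[:, 1], diffs[:, 0])     # AOA measurements
    ranges = np.linalg.norm(diffs, axis=1)              # ranges from RSSI
    p, q = 0.7, 0.3                                     # AOA vs RSSI weights

    def residuals(x):
        d = x - anchors
        ang = np.arctan2(d[:, 1], d[:, 0]) - bearings
        ang = np.arctan2(np.sin(ang), np.cos(ang))      # wrap to [-pi, pi]
        rng = np.linalg.norm(d, axis=1) - ranges
        return np.concatenate([p * ang, q * rng])

    est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
    print(est)   # close to [4. 3.] in this noise-free illustration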

10.
Sensors (Basel) ; 19(9)2019 May 02.
Article in English | MEDLINE | ID: mdl-31052514

ABSTRACT

The increasing development of urban centers brings serious challenges for traffic management. In this paper, we introduce a smart visual sensor developed for a pilot project taking place in the Australian city of Liverpool (NSW). The project's aim was to design and evaluate an edge-computing device that uses computer vision and deep neural networks to track multi-modal transportation in real time while ensuring citizens' privacy. The performance of the sensor was evaluated on a town-centre dataset. We also introduce the interoperable Agnosticity framework, designed to collect, store, and access data from multiple sensors, with results from two real-world experiments.
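
The privacy constraint described above suggests a simple pattern: process frames on the device and let only aggregate counts leave it. The sketch below uses a stubbed detector and hypothetical transport-mode classes to show that pattern; it is not the project's actual pipeline.

    # Hedged sketch: only aggregate counts persist; frames are discarded.
    from collections import Counter
    from datetime import datetime, timezone

    MODES = {"person", "bicycle", "car", "bus"}      # hypothetical classes

    def detect(frame):
        """Stub for the on-device deep-network detector."""
        return ["car", "person"]                     # hypothetical output

    def process(frames):
        counts = Counter()
        for frame in frames:
            counts.update(lbl for lbl in detect(frame) if lbl in MODES)
            # the raw frame is never written anywhere; only counts remain
        return {"ts": datetime.now(timezone.utc).isoformat(), **counts}

    print(process([object(), object()]))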
